
    Comparison between Long-Menu and Open-Ended Questions in computerized medical assessments. A randomized controlled trial

    BACKGROUND: Long-menu questions (LMQs) are viewed as an alternative to open-ended questions (OEQs) in computerized assessment. So far this question type and its influence on examination scores have not been studied sufficiently, yet the increasing use of computerized assessments will also lead to increasing use of this question type. Using a summative online key feature (KF) examination, we evaluated whether LMQs are comparable to OEQs with regard to difficulty, performance and response times. We also evaluated the content for its suitability for LMQs. METHODS: We randomized 146 fourth-year medical students into two groups. For the purpose of this study we created 7 peer-reviewed KF cases with a total of 25 questions. All questions had the same content in both groups, but nine questions had a different answer type: group A answered these 9 questions with an LM type, group B with an OE type. In addition to the LM answer, group A could give an OE answer if the appropriate answer was not included in the list. RESULTS: The average number of correct answers for LMQs and OEQs showed no significant difference (p = 0.93). Among all 630 LM answers, only one correct term (0.32%) was not included in the list of answers. The response time for LMQs did not differ significantly from that of OEQs (p = 0.65). CONCLUSION: LMQs and OEQs do not differ significantly. Compared to standard multiple-choice questions (MCQs), the response time for LMQs and OEQs is longer, probably because they require active problem-solving skills and more practice. LMQs correspond more closely to short-answer questions (SAQs) than to OEQs and should only be used when the answers can be phrased clearly, using only a few precise synonyms. LMQs can decrease cueing effects and significantly simplify scoring in computerized assessment, as sketched below.
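
    A minimal sketch (assuming a simple synonym-list design, not the authors' implementation) of why LMQ scoring is easy to automate: the examinee's typed or selected term is normalized and compared against the item's few accepted synonyms, so no option list is visible up front (reducing cueing) while scoring reduces to a lookup. The function names and the example synonyms are illustrative assumptions.

```python
def normalize(text: str) -> str:
    """Lower-case and collapse whitespace so trivial typing variants still match."""
    return " ".join(text.lower().split())

def score_lmq(answer: str, accepted_synonyms: list[str]) -> int:
    """Return 1 if the normalized answer matches any accepted synonym, else 0."""
    candidate = normalize(answer)
    return int(any(candidate == normalize(s) for s in accepted_synonyms))

# Hypothetical key-feature item: a few precise synonyms for one correct answer.
accepted = ["myocardial infarction", "heart attack"]

print(score_lmq("Myocardial  Infarction", accepted))  # 1: matches after normalization
print(score_lmq("angina", accepted))                  # 0: not an accepted synonym
```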

    A new framework for designing programmes of assessment

    Research on assessment in medical education has focused strongly on individual measurement instruments and their psychometric quality. Without detracting from the value of this research, such an approach is not sufficient to ensure high-quality assessment of competence as a whole. A programmatic approach is advocated, which presupposes criteria for designing comprehensive assessment programmes and for assuring their quality. The paucity of research with relevance to programmatic assessment, and especially its development, prompted us to embark on a research project to develop design principles for programmes of assessment. We conducted focus group interviews to explore the experiences and views of nine assessment experts concerning good practices and new ideas about theoretical and practical issues in programmes of assessment. The discussion was analysed, mapping all aspects relevant for design onto a framework, which was iteratively adjusted to fit the data until saturation was reached. The overarching framework for designing programmes of assessment consists of six assessment programme dimensions: Goals, Programme in Action, Support, Documenting, Improving and Accounting. The model described in this paper can help to frame programmes of assessment; it not only provides a common language, but also a comprehensive picture of the dimensions to be covered when formulating design principles. It helps to identify areas of assessment in which ample research and development have been done and, more importantly, to detect underserved areas. A guiding principle in the design of assessment programmes is fitness for purpose: high-quality assessment can only be defined in terms of its goals.

    Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper

    Background: Reliable and valid written tests of higher cognitive function are difficult to produce, particularly for the assessment of clinical problem solving. Modified essay questions (MEQs) are often used to assess these higher-order abilities in preference to other forms of assessment, including multiple-choice questions (MCQs), and often form a vital component of end-of-course assessments in higher education. It is not clear how effectively these questions assess higher-order cognitive skills. This study was designed to assess the effectiveness of the MEQ in measuring higher-order cognitive skills in an undergraduate institution. Methods: An analysis of MCQs and MEQs used for summative assessment in a clinical undergraduate curriculum was undertaken. A total of 50 MCQs and 139 stages of MEQs, drawn from three examinations run over two years, were examined. The effectiveness of the questions was determined by two assessors and was defined by the questions' ability to measure higher cognitive skills, as determined by a modification of Bloom's taxonomy, and by their quality as determined by the presence of item-writing flaws. Results: Over 50% of the MEQs tested factual recall, a proportion similar to that of the MCQs. The modified essay question failed in its role of consistently assessing higher cognitive skills, whereas the MCQ frequently tested more than mere recall of knowledge. Conclusion: Constructing MEQs that assess higher-order cognitive skills cannot be assumed to be a simple task. Well-constructed MCQs should be considered a satisfactory replacement for MEQs if the MEQs cannot be designed to adequately test higher-order skills. Such MCQs are capable of withstanding the intellectual and statistical scrutiny imposed by a high-stakes exit examination. (Edward J Palmer, Peter G Devitt)

    Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures

    Introduction: Feedback after assessment is essential to support the development of optimal performance, but often fails to reach its potential. Although different assessment cultures have been proposed, the impact of these cultures on students' receptivity to feedback is unclear. This study aimed to explore factors which aid or hinder receptivity to feedback. Methods: Using a constructivist grounded theory approach, the authors conducted six focus groups in three medical schools, in three separate countries, with different institutional approaches to assessment, ranging from a traditional summative assessment structure to a fully implemented programmatic assessment system. The authors analyzed the data iteratively, then identified and clarified key themes. Results: Helpful and counterproductive elements were identified within each school's assessment system. Four principal themes emerged. Receptivity to feedback was enhanced by assessment cultures which promoted students' agency, by the provision of authentic and relevant assessment, and by appropriate scaffolding to aid the interpretation of feedback. Provision of grades and comparative ranking provided a helpful external reference but appeared to hinder the promotion of excellence. Conclusions: This study identified important factors emerging from different assessment cultures which, if addressed by programme designers, could enhance the learning potential of feedback following assessments. Students should be enabled to have greater control over assessment and feedback processes, which should be as authentic as possible. Effective long-term mentoring facilitates this process. The trend of curriculum change towards constructivism should now be mirrored in assessment processes in order to enhance receptivity to feedback.

    Barriers to the uptake and use of feedback in the context of summative assessment

    Despite calls for feedback to be incorporated in all assessments, a dichotomy exists between formative and summative assessments. When feedback is provided in a summative context, it is not always used effectively by learners. In this study we explored the reasons for this. We conducted individual interviews with 17 students who had recently received web-based feedback following a summative assessment. Constant comparative analysis was conducted to identify recurring themes. The summative assessment culture, with its focus on avoiding failure, was a dominant and negative influence on the use of feedback. Strong emotions were prevalent throughout the period of assessment and feedback, and these reinforced the focus on the need to pass rather than to excel. These affective factors were heightened by interactions with others. The influence of prior learning experiences affected expectations about achievement and the need to use feedback. The summative assessment and subsequent feedback appeared disconnected from future clinical workplace learning. Socio-cultural influences and barriers to feedback need to be understood before attempting to provide feedback after all assessments. A move away from the summative assessment culture may be needed in order to maximise the learning potential of assessments.